14 research outputs found

    Mapping the thematic evolution in Communication over the first two decades from the 21st century: A longitudinal approach

    This study offers an overview of the thematic structure of Communication research during the first two decades of the 21st century, 2001-2010 and 2011-2020. The work mapped the author keywords and Keywords Plus of citable articles published in journals indexed in the 2019 edition of the Journal Citation Reports. A longitudinal perspective was employed to visualize the thematic evolution. Four predominant thematic areas were evident in both periods: (1) Speech and Language, (2) Commercial Communication, (3) Health Communication, and (4) Organizational Communication. Four further topics reflected the formation of substantial research areas during the second decade: (1) Science Communication, (2) Scholarly Publishing, (3) Mental Health and Interpersonal Relationships, and (4) Crime and Violence. In general, from the first to the second decade, the technological dimension ceased to be predominant and gave way to a more significant presence of themes responding to a socio-psychological dimension.

    The Use of Transfer Learning for Activity Recognition in Instances of Heterogeneous Sensing

    Transfer learning is a growing field that can address the variability of activity recognition problems by reusing knowledge from previous experiences to recognise activities under different conditions, thereby making better use of resources such as training and labelling effort. Although integrating ubiquitous sensing technology and transfer learning seems promising, there are research opportunities that, if addressed, could accelerate the development of activity recognition. This paper presents TL-FmRADLs, a framework that combines a feature fusion strategy with a teacher/learner approach built on active learning to automate the self-training process of the learner models. TL-FmRADLs is evaluated on InSync, an open-access dataset introduced for the first time in this paper. Results show promising effects in mitigating the scarcity of labelled data by enabling the learner model to outperform the teacher's performance.
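    The following is a minimal sketch of the teacher/learner idea described above: a teacher model trained on previously labelled (source) data stands in for a human annotator and labels the instances the learner is least certain about, automating the learner's self-training. This is illustrative only, not the authors' TL-FmRADLs implementation; the scikit-learn classifier, seed size, query budget, and number of rounds are assumptions.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def teacher_guided_active_learning(X_source, y_source, X_target,
                                   seed_size=20, query_size=20, rounds=5):
    """Teacher labels the samples the learner is least certain about."""
    teacher = RandomForestClassifier(n_estimators=100, random_state=0)
    teacher.fit(X_source, y_source)
    learner = RandomForestClassifier(n_estimators=100, random_state=0)

    X_pool = np.asarray(X_target)
    # Bootstrap the learner with a small teacher-labelled seed set.
    X_train = X_pool[:seed_size]
    y_train = teacher.predict(X_train)
    X_pool = X_pool[seed_size:]
    learner.fit(X_train, y_train)

    for _ in range(rounds):
        if len(X_pool) == 0:
            break
        proba = learner.predict_proba(X_pool)
        # Active-learning query: pick the learner's least confident samples.
        uncertain = np.argsort(proba.max(axis=1))[:query_size]
        X_query = X_pool[uncertain]
        y_query = teacher.predict(X_query)           # teacher acts as the oracle
        X_train = np.vstack([X_train, X_query])
        y_train = np.concatenate([y_train, y_query])
        X_pool = np.delete(X_pool, uncertain, axis=0)
        learner.fit(X_train, y_train)                # next self-training iteration
    return learner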

    Portal Design for the Open Data Initiative: A Preliminary Study

    The Open Data Initiative (ODI) was previously proposed to facilitate the sharing of annotated datasets within the pervasive healthcare research community. This paper outlines the requirements for the ODI portal based on the ontological data model of the ODI and its typical usage scenarios. In the context of an action research framework, the paper describes the ODI platform, the design of a prototype user interface for initial evaluation, and its technical review by third-party researchers (n = 3). The main findings from the technical review were the need for a more flexible user interface to reflect the different experimental configurations in the research community, and provision for describing dataset usage and dissemination conditions. The technical review also identified the value of permitting datasets of variable quality, as noisy datasets are useful in testing activity recognition algorithms. Revisions to the ODI ontology and platform are proposed based on the findings from this study.
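    As an illustration of the kind of record the revised ontology might need to support, the sketch below captures the review findings as fields of a hypothetical dataset description: experimental configuration, usage description, dissemination conditions, and an explicit quality label so that noisy datasets remain available. The class and field names are assumptions, not the actual ODI data model.

from dataclasses import dataclass, field
from typing import List

@dataclass
class ODIDatasetRecord:
    """Hypothetical dataset description reflecting the review findings."""
    title: str
    experimental_configuration: str          # e.g. sensor placement, environment
    usage_description: str                   # how the dataset has been used so far
    dissemination_conditions: str            # licence / sharing restrictions
    quality: str = "noisy"                   # "clean" or "noisy"; noisy data is still useful
    annotations: List[str] = field(default_factory=list)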

    InSync

Version 1.0

======================================
Abstract:
======================================
The InSync data set was collected at the Pervasive Computing lab at Ulster University. It consists of subjects performing activities of daily living (ADLs) in an atmosphere that mimics a real-life environment, while data is collected using three different sensing technologies: inertial, image, and audio. The data set can be used to research human activity recognition algorithms and to tackle problems in classification, transfer learning, data fusion, data segmentation, feature extraction, and so on.

======================================
Number of instances:
======================================
16,959 (inertial data points) + 650 (thermal images) + 16,986 (audio files)

======================================
Relevant information:
======================================
InSync contains 12 hours of data from ten subjects, consisting of 78 runs (times that a subject performed the scripted protocol). Sensor data from three different technologies (inertial, images, and audio) captured the performance (not simulation) of the subjects carrying out ADLs. All the activities were annotated a posteriori using a video stream.

***** ACTIVITIES OF DAILY LIVING
The data set aimed at recording the subjects' physical activity performance, so the tasks consisted of ADLs in well-known scenarios. Three general scenarios were chosen: a bedroom-related scenario in which the subjects performed two of the ADLs, namely personal hygiene and dressing; a breakfast-related scenario chosen to embrace the ADL of feeding, as it has been used extensively in the literature; and an obstacle-free scenario in which the subjects could walk freely to demonstrate their transferring capabilities. The script was designed with nine high-level activities:

Bedroom:
(1) Napping
(2) Wearing joggers
(3) Combing hair
(4) Brushing teeth
Corridor:
(5) Operating door
Kitchen:
(6) Drinking water
(7) Eating cereal
Livingroom:
(8) Transporting (i.e. walking)
(9) Resting (i.e. sitting in a chair)

Details of the rooms' dimensions and sensor locations are available in the Relevant Papers.

***** SENSING TECHNOLOGY
The deployed sensing technology included thirteen Shimmer devices enabled with 3-axis accelerometers, four Matrix Voice ESP32 boards, each with eight embedded microphones, and four Thermal Vision Sensors (TVS). The sensing technology was placed as described next:

Shimmers worn by the subject:
- Right wrist
- Left wrist
- Lower back
- Upper back
- Right shoe

Shimmers mounted on everyday items:
- Comb
- Toothbrush
- Glass
- Spoon
- Jogger
- Belt
- Strap to mimic a watch
- Strap to mimic a smart shoe

Matrix Voice ESP32 (one located in each room):
- Bedroom
- Corridor
- Kitchen
- Livingroom

Thermal sensor (one located in each room):
- Bedroom
- Corridor
- Kitchen
- Livingroom

======================================
Attribute information:
======================================
The data set comprises the readings of inertial sensors, thermal images, and audio files recording the performed ADLs. There is a total of 60 attributes for the inertial data, which include the mean value and root-mean-square (RMS) from the x, y, and z axes. The thermal data consists of 32x32-pixel grayscale images, and the audio data consists of 44.1 kHz waveform audio files.

Videos of the experiment can be seen at the following links.
Bedroom:
(1) Napping: https://youtu.be/IqWLKsgch6A
(2) Wearing joggers: https://youtu.be/FJBjO9C4Q4U
(3) Combing hair: https://youtu.be/bYKrKbVBNos
(4) Brushing teeth: https://youtu.be/wuVkrWlsmSs
Corridor:
(5) Operating door: https://youtu.be/pWJjx3TH6Q4
Kitchen:
(6) Drinking water: https://youtu.be/wS9OBKK_LFY
(7) Eating cereal: https://youtu.be/nOK8TuyCXBA
Livingroom:
(8) Transporting (i.e. walking): https://youtu.be/45MGsYS9cYg
(9) Resting (i.e. sitting in a chair): https://youtu.be/45MGsYS9cYg

IMPORTANT: The videos listed above were recorded using conventional webcams. They were used as ground truth only; they were not used for training or testing purposes. The participants' privacy has been protected by blurring their faces. The playback speed of the videos varies, as different sampling rates were used when recording them.
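    A minimal sketch of how the inertial attributes described above (per-axis mean and root-mean-square) could be derived from a window of raw 3-axis accelerometer readings. The pandas-based loading, column names, and window length are assumptions and are not prescribed by InSync.

import numpy as np
import pandas as pd

def window_features(window: pd.DataFrame) -> dict:
    """Mean and root-mean-square (RMS) for each accelerometer axis of one window."""
    feats = {}
    for axis in ("x", "y", "z"):
        values = window[axis].to_numpy(dtype=float)
        feats[f"mean_{axis}"] = float(values.mean())
        feats[f"rms_{axis}"] = float(np.sqrt(np.mean(values ** 2)))
    return feats

# Hypothetical usage: a CSV with columns x, y, z split into fixed-size windows.
# df = pd.read_csv("shimmer_right_wrist.csv")
# features = [window_features(w) for _, w in df.groupby(df.index // 128)]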

    COMFormer: classification of maternal-fetal and brain anatomy using a residual cross-covariance attention guided transformer in ultrasound

    Monitoring the healthy development of a fetus requires accurate and timely identification of different maternal-fetal structures as they grow. To facilitate this objective in an automated fashion, we propose a deep-learning-based image classification architecture called COMFormer to classify maternal-fetal and brain anatomical structures present in two-dimensional fetal ultrasound images. The proposed architecture classifies the two subcategories separately: maternal-fetal structures (abdomen, brain, femur, thorax, mother's cervix, and others) and brain anatomical structures (trans-thalamic, trans-cerebellum, trans-ventricular, and non-brain). Our architecture relies on a transformer-based approach that leverages spatial and global features through a newly designed residual cross-covariance attention (R-XCA) block. This block introduces an advanced cross-covariance attention mechanism to capture long-range representations of the input using spatial (e.g., shape, texture, intensity) and global features. To build COMFormer, we used a large publicly available dataset (BCNatal) consisting of 12,400 images from 1,792 subjects. Experimental results show that COMFormer outperforms recent CNN- and transformer-based models, achieving 95.64% and 96.33% classification accuracy on maternal-fetal and brain anatomy, respectively.
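    The sketch below illustrates the general idea of a cross-covariance attention block with a residual connection, in the spirit of the R-XCA block: attention is computed across the channel (feature) dimension rather than the token dimension, yielding a per-head attention map of size head_dim x head_dim that captures long-range interactions between features. This follows the publicly known XCiT-style formulation and is not the authors' implementation; all layer choices and hyper-parameters are assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualXCABlock(nn.Module):
    """Cross-covariance attention with a residual connection (illustrative)."""
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.num_heads = num_heads
        self.norm = nn.LayerNorm(dim)
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)
        # Learnable temperature per head, as used in cross-covariance attention.
        self.temperature = nn.Parameter(torch.ones(num_heads, 1, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim)
        B, N, C = x.shape
        qkv = self.qkv(self.norm(x)).reshape(B, N, 3, self.num_heads, C // self.num_heads)
        q, k, v = qkv.permute(2, 0, 3, 4, 1)        # each: (B, heads, head_dim, N)
        # L2-normalise along the token dimension, then attend channel-to-channel.
        q = F.normalize(q, dim=-1)
        k = F.normalize(k, dim=-1)
        attn = (q @ k.transpose(-2, -1)) * self.temperature   # (B, heads, d, d)
        attn = attn.softmax(dim=-1)
        out = (attn @ v).permute(0, 3, 1, 2).reshape(B, N, C)  # back to (B, N, C)
        return x + self.proj(out)                   # residual connection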